Research Synthesis Methods
Top medRxiv preprints most likely to be published in this journal, ranked by match strength.
Objectives: To quantify the amount and certainty of evidence in Cochrane systematic reviews of interventions, and to describe how this evidence has evolved over time. Design: Large-scale meta-research study. Data source: Cochrane Database of Systematic Reviews (search date April 8, 2025). Eligibility criteria: Cochrane systematic reviews assessing interventions and reporting "Summary of findings" tables. Data extraction: Data were automatically extracted using web scraping and a large language model, with q...
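The extraction step this abstract describes pairs scraping with a language model; a deterministic post-processing pass is typically still needed to normalise what the model returns. The sketch below is an assumption, not the authors' pipeline: a minimal helper that maps free-text certainty wording from "Summary of findings" tables onto canonical GRADE levels.

```python
import re

# Hypothetical post-processing helper (the paper's actual code is not shown):
# normalise free-text GRADE certainty ratings to one of four canonical levels.
def normalise_certainty(raw):
    """Map free-text certainty wording to a canonical GRADE level, or None."""
    text = raw.strip().lower()
    # Check "very low" before "low" so the longer phrase wins.
    for level in ("very low", "high", "moderate", "low"):
        if re.search(rf"\b{level}\b", text):
            return level
    return None  # unrecognised wording would be flagged for manual review

demo = normalise_certainty("Moderate-certainty evidence")  # "moderate"
```

Unmatched strings deliberately return `None` rather than a guess, so ambiguous rows can be routed back to a human reviewer.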
Background: Systematic reviews are important for informing public health policies and program selection; however, they are time- and resource-intensive. Artificial intelligence (AI) offers a way to reduce this labour-intensive work across various aspects of systematic review production, including data extraction. To date, there is limited robust evidence evaluating the accuracy and efficiency of AI for data extraction. This study within a review (SWAR) aimed to determine whether human d...
Introduction: Several filters are routinely used to remove animal or nonhuman records in Ovid Embase, despite there being no performance data for them. The filters take different design approaches. Objective: To understand and compare the impact of 11 filters for removing animal or nonhuman records in Ovid Embase, and to understand the indexing of relevant subject headings in Embase. Methods: To assess filter performance, we screened and categorised 3,000 records as should be removed or should be reta...
Background: Individual participant data (IPD) meta-analyses obtain, harmonise and synthesise the raw individual-level data from multiple studies, and are increasingly important in an era of data sharing and personalised medicine to inform clinical practice and policy. Objectives: (1) Describe the landscape of IPD meta-analysis of randomised trials over time; (2) establish current practice in design, conduct, analysis and reporting for pairwise IPD meta-analysis; and (3) derive recommendations to i...
Systematic reviews are used in academia, biotechnology, pharmaceutical companies and government to synthesise and appraise large numbers of publications. The current (largely manual) workflow takes an average of 9-18 months [1], at a cost of $100,000+ per review [2]. We built a platform, ScholaraAI, that leverages artificial intelligence to cut this to < 0.1% of the time, without compromising quality. ScholaraAI facilitates end-to-end systematic reviews: search, screening, data extraction, and analysi...
Background: The ability of large language models (LLMs) to work collaboratively and screen studies in a systematic review (SR) is under-explored. We therefore aimed to evaluate the effectiveness of LLMs in automating the screening process in systematic reviews. Methods: This is an observational study that included labeled data (titles and abstracts) for five SRs. Originally, two reviewers screened the citations independently for eligibility. A third reviewer cross-checked each citation for quality ...
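Evaluations like the one this abstract describes reduce to comparing model decisions against the human reference labels. A minimal sketch, with illustrative names (`model_a`, `model_b`, `screening_metrics`) that are assumptions rather than the study's own code, might combine two model "reviewers" with an include-if-either rule and score the result:

```python
# Hedged sketch: scoring LLM screening decisions against human labels.
def screening_metrics(preds, labels):
    """Sensitivity (recall of includes) and specificity for screening."""
    tp = sum(1 for p, y in zip(preds, labels) if p and y)
    tn = sum(1 for p, y in zip(preds, labels) if not p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Two model "reviewers" vote; keep a citation if either votes include,
# mirroring the safety-first convention of dual human screening.
model_a = [1, 1, 0, 0, 1]
model_b = [1, 0, 0, 0, 1]
ensemble = [a or b for a, b in zip(model_a, model_b)]
human = [1, 1, 0, 0, 0]
sens, spec = screening_metrics(ensemble, human)
```

The include-if-either rule trades specificity for sensitivity, which is the usual priority in screening: a missed relevant study is costlier than an extra full-text check.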
Background: The number of problematic randomized clinical trials (RCTs) has risen sharply in recent decades, posing serious challenges to the integrity of the healthcare evidence ecosystem. Objective: To investigate whether retraction of problematic RCTs could reduce evidence contamination. Design: Retrospective cohort study. Setting: A secondary analysis of the VITALITY Study database. Participants: 1,330 retracted RCTs with 847 systematic reviews. Measurements: The difference in the median number (and...
Background: The exponential growth of Mendelian randomization (MR) literature has created challenges for systematically organising and synthesising evidence, with key information fragmented across heterogeneous publications. We present MR-KG, a knowledge graph resource using large language models (LLMs) to systematically extract and structure published MR evidence at scale. Methods: We evaluated eight OpenAI and local LLMs for extracting structured information from MR study abstracts. Two reviewers...
Objectives: Case reports and case series comprise a significant portion of the biomedical literature, yet unlike case reports, the National Library of Medicine does not index case series as a Publication Type. This hampers clinicians' and researchers' ability to retrieve, identify and analyze evidence from this type of study. Materials and Methods: PubMed articles mentioning "case series" in the title or abstract were characterized to learn what the authors themselves consider to be case series. W...
Hybrid controlled trials (HCTs) incorporate real-world data into randomized controlled trials (RCTs) by augmenting the internal control arm with patients receiving the same treatment in routine care. Beyond increasing power, HCTs may improve recruitment by supporting unequal randomization ratios that increase patient access to experimental treatments. However, HCT validity is threatened by bias from unmeasured confounding due to lack of randomization of external controls, leading to outcome non-...
Introduction: Randomised controlled trials (RCTs) investigate the safety and efficacy of interventions. It has become clear, however, that some RCTs include fabricated data. The INSPECT-SR tool assesses the trustworthiness of RCTs in systematic reviews of healthcare-related interventions. However, where individual participant data (IPD) can be obtained, a more thorough assessment of trustworthiness is possible. Consequently, INSPECT-SR recommends obtaining IPD to resolve uncertainties, though there ...
Importance: Over several decades there have been extensive debates on the use and misuse of statistical significance. It is important to capture which P-values are reported in biomedical papers and whether their patterns have changed over time. Objective: To quantify the reporting dynamics of P-values in biomedical articles in the PubMed and PubMed Central (PMC) databases over a 35-year period (1990-2025). Design: Data were retrieved from the National Library of Medicine via PubMed and PubMed Central...
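A study of P-value reporting patterns at this scale needs a text-mining step to pull reported values out of article text and classify how they are stated (exact value versus threshold). The regex and function below are illustrative assumptions, not the authors' actual extraction rules:

```python
import re

# Match constructs like "P = .032", "p < 0.05", "P ≤ 0.001".
P_PATTERN = re.compile(r"[Pp]\s*([<>=≤])\s*(0?\.\d+)")

def classify_p_values(text):
    """Return (reporting_style, value) pairs found in a passage."""
    out = []
    for op, val in P_PATTERN.findall(text):
        style = "exact" if op == "=" else "threshold"
        out.append((style, float(val)))
    return out

sample = "Mortality differed (P = .032); adherence did not (p < 0.05)."
found = classify_p_values(sample)
```

Distinguishing exact values from thresholds matters because a shift from "p < 0.05" toward exact reporting is itself one of the trends such a study can measure.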
Background: Systematic reviews (SRs) are essential for evidence-based medicine but require extensive time and resources for abstract screening. Large language models (LLMs) offer potential for automating this process, yet concerns about data privacy, intellectual property protection, and reproducibility limit the use of cloud-based solutions in research settings. Objective: To evaluate the performance of a locally deployed 20-billion parameter LLM for automated abstract screening in systematic revi...
Large language models (LLMs) are increasingly transforming scientific workflows, yet their application to rigorous evidence synthesis remains underexplored. We present a fully automated pipeline, executed as a single Python script, that leverages the Claude API to generate systematic reviews from literature search through manuscript completion without human intervention. Our pipeline processes hundreds of papers through iterative API calls for inclusion evaluation, information extraction...
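The shape of such a pipeline can be sketched without any network access by stubbing the model call. Everything below (`call_model`, `screen_papers`, the prompt text) is an illustrative assumption about how an iterative screening loop might look, not the authors' script; a real version would replace the stub with a Claude API request.

```python
def call_model(prompt):
    # Stub: a deterministic stand-in for an LLM API call, so the loop
    # structure can be shown and tested offline.
    return "INCLUDE" if "randomized" in prompt.lower() else "EXCLUDE"

def screen_papers(abstracts):
    """Iterate over candidate papers, asking the model for an inclusion verdict."""
    decisions = {}
    for pid, text in abstracts.items():
        verdict = call_model(f"Should this study be included? {text}")
        decisions[pid] = (verdict == "INCLUDE")
    return decisions

abstracts = {
    "p1": "A randomized trial of drug X.",
    "p2": "A narrative commentary on policy.",
}
included = screen_papers(abstracts)
```

Keeping the model call behind a single function boundary is what makes an end-to-end loop like this auditable: prompts, verdicts, and retries can all be logged at one choke point.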
Background: Journals may respond to integrity concerns by publishing an editorial response (editorial notice, expression of concern (EoC) or retraction). We investigated whether the type of editorial response affected citation rates. Methods: We obtained citations for 172 randomised controlled trials (RCTs) with integrity concerns (41 had editorial notices, 38 EoCs and 23 retractions) and control RCTs from the same journal and year. Monthly citation rates up to 60 months before and after editorial ...
Citation screening in systematic reviews is time-consuming. Machine learning can help semi-automate it but faces obstacles. Each systematic review is a new dataset without initial annotations. Extreme class imbalance toward irrelevant studies makes it difficult to select a good subset of samples to train a classifier. The rigid requirement of (near) total recall of relevant studies demands a careful trade-off between accuracy and recall. This paper pilots a weak classifier ensemble approach to...
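The core idea of a weak classifier ensemble under a near-total-recall constraint can be shown in a few lines. This is a minimal sketch of the general technique, not the paper's implementation: each weak classifier votes, vote shares become scores, and a deliberately low threshold keeps recall high at the cost of extra screening work.

```python
def ensemble_scores(weak_predictions):
    """Fraction of weak classifiers voting 'relevant' for each citation."""
    n = len(weak_predictions)
    return [sum(votes) / n for votes in zip(*weak_predictions)]

def screen(scores, threshold):
    """Keep every citation whose vote share clears the threshold."""
    return [s >= threshold for s in scores]

clf_votes = [
    [1, 0, 1, 0, 0],  # weak classifier 1
    [1, 1, 0, 0, 0],  # weak classifier 2
    [1, 0, 1, 0, 1],  # weak classifier 3
]
scores = ensemble_scores(clf_votes)
kept = screen(scores, threshold=1 / 3)  # low threshold favours recall
```

With a threshold of one vote in three, any single weak classifier can keep a citation in the pool, which is exactly the accuracy-for-recall trade the abstract describes.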
Objective: Our goal is to unify the 72 biomedical publication types and study designs (collectively, PTs) into a single rubric and hierarchy. Materials and Methods: This is carried out in a data-driven manner by computing pairwise similarities of each PT against all others to form a similarity matrix. By performing hierarchical clustering we place each PT in a specific category and collect these into broader categories. Results: Spearman correlations among PT pairs ranged from strongly negative to s...
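The clustering step described here takes a PT-by-PT similarity matrix and merges the most similar types first. The study's linkage choice is not stated in the abstract, so the sketch below assumes single linkage on distances of the form 1 - similarity, with a made-up four-type matrix for illustration:

```python
def single_linkage(sim, n_clusters):
    """Agglomerative clustering on a similarity matrix (single linkage)."""
    n = len(sim)
    clusters = [{i} for i in range(n)]
    # Inter-cluster distance: smallest (1 - similarity) across member pairs.
    dist = lambda a, b: min(1 - sim[i][j] for i in a for j in b)
    while len(clusters) > n_clusters:
        pairs = [(dist(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)  # merge the closest pair of clusters
        clusters[i] |= clusters.pop(j)
    return clusters

# Toy similarity matrix for four publication types.
sim = [
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.0, 0.1, 0.8, 1.0],
]
clusters = single_linkage(sim, 2)
```

Cutting the merge process at a chosen cluster count is what yields the "specific categories collected into broader categories" structure the abstract describes: looser cuts give the broad categories, tighter cuts the specific ones.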
Background: Routinely collected health data are increasingly used to generate real-world evidence for therapeutic decision-making. Yet stakeholders, including clinicians, pharmaceutical industry representatives, patient advocacy groups, and statisticians, prioritize different aspects of data quality, analysis, and interpretation. Without explicit consideration of these perspectives, analyses risk being fragmented, misaligned with end-user needs, or lacking transparency. Methods: We developed a sta...
Background: In pharmacoepidemiological studies, the days-of-treatment (DoT) duration associated with individual electronic drug utilization records (DUR) is usually missing. Researcher-defined duration (RDD) calculation approaches, as opposed to data-driven approaches, can be used to estimate DoT based on the specific choices and assumptions made by investigators. These are usually underreported or even undocumented. We aimed to develop a framework for the standardization of terminology, formulas, im...
Objective: To address the inefficiency, subjectivity, and high expertise barrier of traditional epidemiological causal inference, this study designed, developed, and validated an AI-powered agent (EpiCausalX Agent) to automate the end-to-end workflow. It integrates cross-database literature retrieval, intelligent causal reasoning, and Directed Acyclic Graph (DAG) visualization to provide a reliable, accessible tool for researchers. Materials and Methods: Built on the LangChain 1.0 framework with a ...